Video QoE Estimation using Network Measurement Data
More than ever before, last-mile Internet Service Providers (ISPs) need to efficiently provision and manage their networks to meet the growing demand for Internet video (expected to be 82% of global IP traffic in 2022). This network optimization requires ISPs to have an in-depth understanding of end-user video Quality of Experience (QoE). Understanding video QoE, however, is challenging for ISPs as they generally do not have access to applications at end-user devices to observe key objective metrics impacting QoE. Instead, they have to rely on measurements of network traffic to estimate objective QoE metrics and use them for troubleshooting QoE issues. However, this can be challenging for HTTP-based Adaptive Streaming (HAS) video, the de facto standard for streaming over the Internet, because of the complex relationship between network-observable metrics and video QoE metrics. This complexity largely results from HAS's robustness to short-term variations in the underlying network conditions, achieved through the video buffer and bitrate adaptation. In this thesis, we develop approaches that use network measurement to infer video QoE. In developing inference approaches, we provide a toolbox of techniques suitable for a diversity of streaming contexts as well as different types of network measurement data.
We first develop two approaches for QoE estimation that model video sessions based on the network traffic dynamics of the HAS protocol under two different streaming contexts. Our first approach, MIMIC, estimates unencrypted video QoE using HTTP logs. We perform a large-scale validation of MIMIC using ground-truth QoE metrics from a popular video streaming service. We also deploy MIMIC in a real-world cellular network and demonstrate some preliminary use cases of QoE estimation for ISPs. Our second approach, eMIMIC, estimates QoE metrics for encrypted video using packet-level traces. We evaluate eMIMIC using an automated experimental framework under realistic network conditions and show that it outperforms state-of-the-art QoE estimation approaches.
Finally, we develop an approach to address the scalability challenges of QoE inference. We leverage machine learning to infer QoE from coarse-granular but lightweight network data in the form of Transport Layer Security (TLS) transactions. We analyze the scalability and accuracy trade-off in using such data for inference. Our evaluation shows that the TLS transaction data can be used for detecting video performance issues with reasonable accuracy and significantly lower computation overhead as compared to packet-level traces.
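The idea of inferring QoE from lightweight TLS-transaction records rather than full packet traces can be illustrated with a small sketch. The record fields, the feature set, and the bitrate threshold below are all assumptions made for illustration; they are not the thesis's actual design:

```python
from dataclasses import dataclass

@dataclass
class TlsTransaction:
    # Hypothetical record: one TLS request/response exchange, a much
    # lighter-weight input than a full packet-level trace.
    start_ts: float    # seconds
    end_ts: float      # seconds
    bytes_down: int    # response payload size

def session_features(txns):
    """Aggregate coarse per-session features from TLS transactions.

    Transaction sizes, inter-arrival gaps, and average throughput stand
    in for the kind of lightweight inputs described above; this exact
    feature set is illustrative only.
    """
    txns = sorted(txns, key=lambda t: t.start_ts)
    sizes = [t.bytes_down for t in txns]
    gaps = [b.start_ts - a.end_ts for a, b in zip(txns, txns[1:])]
    duration = txns[-1].end_ts - txns[0].start_ts
    return {
        "mean_chunk_bytes": sum(sizes) / len(sizes),
        "throughput_bps": 8 * sum(sizes) / duration,
        "max_gap_s": max(gaps) if gaps else 0.0,
    }

def flag_low_quality(feats, min_bitrate_bps=1_000_000):
    # Toy rule: a sustained average bitrate below ~1 Mbps suggests the
    # player is stuck at a low rendition (the threshold is an assumption;
    # the thesis uses learned models rather than a fixed cutoff).
    return feats["throughput_bps"] < min_bitrate_bps
```

In practice such features would feed a trained classifier rather than a fixed threshold; the point of the sketch is only that per-transaction records suffice to compute them, with no per-packet processing.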
Estimating WebRTC Video QoE Metrics Without Using Application Headers
The increased use of video conferencing applications (VCAs) has made it
critical to understand and support end-user quality of experience (QoE) by all
stakeholders in the VCA ecosystem, especially network operators, who typically
do not have direct access to client software. Existing VCA QoE estimation
methods use passive measurements of application-level Real-time Transport
Protocol (RTP) headers. However, a network operator does not always have
access to RTP headers, particularly when VCAs use custom RTP protocols
(e.g., Zoom) or due to system constraints (e.g., legacy measurement systems).
Given this challenge, this paper considers the use of more standard features in
the network traffic, namely, IP and UDP headers, to provide per-second
estimates of key VCA QoE metrics such as frame rate and video resolution. We
develop a method that uses machine learning with a combination of flow
statistics (e.g., throughput) and features derived based on the mechanisms used
by the VCAs to fragment video frames into packets. We evaluate our method for
three prevalent VCAs running over WebRTC: Google Meet, Microsoft Teams, and
Cisco Webex. Our evaluation consists of 54,696 seconds of VCA data collected
from both (1) controlled in-lab network conditions and (2) real-world
networks from 15 households. We show that the ML-based approach yields similar
accuracy compared to the RTP-based methods, despite using only IP/UDP data. For
instance, we can estimate FPS within 2 FPS for up to 83.05% of one-second
intervals in the real-world data, which is only 1.76% lower than using the
application-level RTP headers.
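One family of features this abstract alludes to comes from how VCAs fragment a video frame into packets: a frame typically becomes a run of near-MTU-sized packets terminated by a smaller one. The sketch below turns that observation into a per-second frame-rate estimate. The payload threshold and the "small packet ends a frame" heuristic are assumptions for illustration, not the paper's exact method:

```python
MTU_PAYLOAD = 1200  # assumed typical max UDP payload; real values vary by VCA

def estimate_fps(packets, interval=1.0):
    """Estimate frames per second from (timestamp, udp_payload_len) pairs.

    Heuristic (an assumption, not the paper's exact feature set): a video
    frame is fragmented into a run of near-MTU packets ending with a
    smaller one, so each sub-MTU packet marks the end of one frame.
    Returns a dict mapping interval index -> estimated frame count.
    """
    frame_ends = [ts for ts, size in packets if size < MTU_PAYLOAD]
    fps = {}
    for ts in frame_ends:
        bucket = int(ts // interval)
        fps[bucket] = fps.get(bucket, 0) + 1
    return fps
```

In the paper's setting, features like these are combined with flow statistics (e.g., throughput) and fed to a machine-learning model rather than used directly as the estimate.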
TANGO: Performance and Fault Management in Cellular Networks through Cooperation between Devices and Edge Computing Nodes
Cellular networks have become an essential part of our lives. With increasing demands on their available bandwidth, we are seeing failures and performance degradations for data and voice traffic on the rise. In this paper, we propose the view that fog computing, integrated into the edge components of cellular networks, can partially alleviate this situation. In our vision, some data gathering and data analytics capability will be developed at the edge of the cellular network, and client devices will coordinate with the network, using this edge capability, to reduce failures and performance degradations. We also envisage proactive management of disruptions, including prediction of impending events of interest (such as congestion or call drops) and deployment of appropriate mitigation actions. We show that a simple streaming media pre-caching service built using such device-fog cooperation significantly expands the number of streaming video users that can be supported in a nominal cellular network of today.
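The pre-caching service mentioned above can be sketched as a small edge cache that the device side populates when congestion is predicted. The prediction signal, cache policy, and class interface here are all assumptions for illustration; they are not TANGO's actual design:

```python
from collections import OrderedDict

class EdgePrefetchCache:
    """Toy edge cache illustrating device-fog cooperation.

    The device (or edge analytics) calls prefetch() when it predicts
    impending congestion, so subsequent chunk requests can be served
    locally instead of over the congested backhaul. LRU eviction and
    the capacity limit are illustrative choices.
    """
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.store = OrderedDict()  # chunk_id -> bytes, in LRU order

    def prefetch(self, chunk_id, data):
        # Triggered by a congestion prediction: stage the chunk at the edge.
        self.store[chunk_id] = data
        self.store.move_to_end(chunk_id)
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least-recently used

    def get(self, chunk_id):
        if chunk_id in self.store:
            self.store.move_to_end(chunk_id)
            return self.store[chunk_id]
        return None  # miss: fall back to fetching over the backhaul
```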
Optimal Radius for Connectivity in Duty-Cycled Wireless Sensor Networks
We investigate the condition on transmission radius needed to achieve connectivity in duty-cycled wireless sensor networks (briefly, DC-WSN). First, we settle a conjecture of Das et al. (2012) and prove that the connectivity condition on Random Geometric Graphs (RGG), given by Gupta and Kumar (1998), can be used to derive a weak sufficient condition to achieve connectivity in DC-WSN. We also present a stronger result which gives a necessary and sufficient condition for connectivity and is hence optimal. The optimality of such a radius is also tested via simulation for two specific duty-cycle schemes, called the contiguous and the random selection duty-cycle schemes.
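The Gupta-Kumar condition referenced above says an RGG on n uniform points in the unit square is asymptotically connected when the radius r satisfies pi * r^2 = (ln n + c(n)) / n with c(n) -> infinity. A minimal simulation of the kind the abstract mentions (without duty-cycling, which the paper adds) can be sketched as:

```python
import math
import random

def rgg_connected(n, radius, seed=0):
    """Check connectivity of a random geometric graph on the unit square,
    using union-find over all point pairs within the given radius."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= radius:
                parent[find(i)] = find(j)  # union the two components
    return len({find(i) for i in range(n)}) == 1

def critical_radius(n, c=0.0):
    # Gupta-Kumar scaling: pi * r^2 = (ln n + c) / n.
    return math.sqrt((math.log(n) + c) / (math.pi * n))
```

Repeating `rgg_connected(n, critical_radius(n, c))` over many seeds and values of c is the standard way to observe the connectivity threshold empirically; the paper's simulations additionally apply a duty-cycle scheme to the node set.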
A Comparative Analysis of Ookla Speedtest and Measurement Labs Network Diagnostic Test (NDT7)
Consumers, regulators, and ISPs all use client-based "speed tests" to measure
network performance, both in single-user settings and in aggregate. Two
prevalent speed tests, Ookla's Speedtest and Measurement Lab's Network
Diagnostic Test (NDT), are often used for similar purposes, despite having
significant differences in both the test design and implementation and in the
infrastructure used to conduct measurements. In this paper, we present a
comparative evaluation of Ookla and NDT7 (the latest version of NDT), both in
controlled and wide-area settings. Our goal is to characterize when, by how
much, and under what circumstances these two speed tests differ, as well as what
factors contribute to the differences. To study the effects of the test design,
we conduct a series of controlled, in-lab experiments under a variety of
network conditions and usage modes (TCP congestion control, native vs. browser
client). Our results show that Ookla and NDT7 report similar speeds when the
latency between the client and server is low, but that the tools diverge when
path latency is high. To characterize the behavior of these tools in wide-area
deployment, we collect more than 40,000 pairs of Ookla and NDT7 measurements
across six months and 67 households, with a range of ISPs and speed tiers. Our
analysis demonstrates various systemic issues, including high variability in
NDT7 test results and systematically under-performing servers in the Ookla
network.
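The wide-area analysis described above rests on paired measurements: an Ookla and an NDT7 test run back-to-back from the same client. A minimal sketch of how such pairs might be summarized is below; the field ordering and the relative-difference metric are assumptions for illustration, not the paper's exact methodology:

```python
from statistics import median

def paired_diffs(pairs):
    """Summarize paired (ookla_mbps, ndt7_mbps) download measurements.

    Relative difference is taken against the Ookla result; a positive
    value means NDT7 reported a lower speed for that pair. Both the
    baseline choice and the summary statistics are illustrative.
    """
    rel = [(o - n) / o for o, n in pairs if o > 0]
    return {
        "median_rel_diff": median(rel),
        "frac_ndt7_lower": sum(r > 0 for r in rel) / len(rel),
    }
```

Stratifying such summaries by household, ISP, and speed tier is what lets an analysis distinguish systemic tool differences (e.g., test-design effects at high path latency) from local network effects.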